Recently Dwarkesh Patel released an interview with François Chollet (hereafter Dwarkesh and François). I thought this was one of Dwarkesh's best recent podcasts, and one of the better discussions the AI community has had in a while. Instead of subtweeting those...
i. What is futarchy
Futarchy is a proposed system of governance that combines prediction markets with democratic decision-making. Developed by Robin Hanson (who is at my university!), it aims to improve policy outcomes by harnessing the informational efficiency of markets. Under futarchy, citizens would vote on desired outcomes or metrics of success, but not on specific policies. Instead, prediction markets would determine which policies are most likely to achieve the voted-upon goals. Traders in these markets would bet on the expected outcomes of different policy options, and the policies predicted to be most successful would be implemented automatically. You vote on goals, but bet on beliefs.
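As a toy sketch (my own illustration, not part of Hanson's proposal), the decision rule boils down to an argmax over conditional market estimates; the policy names and prices below are hypothetical placeholders:

```python
# Illustrative futarchy-style decision rule (my sketch, not Hanson's spec).
# Citizens vote on a welfare metric; conditional prediction markets estimate
# that metric under each candidate policy; the highest estimate is enacted.

def futarchy_choice(market_estimates: dict[str, float]) -> str:
    """Pick the policy whose conditional market predicts the best outcome."""
    return max(market_estimates, key=market_estimates.get)

# Hypothetical conditional-market prices for "welfare index if enacted":
markets = {
    "policy_a": 102.4,
    "policy_b": 98.7,
    "status_quo": 100.0,
}

print(futarchy_choice(markets))  # → policy_a
```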
What I want to draw your attention to is an unintended, but beneficial, side effect of such a system. Policy will reflect the utility functions of the people, even if those functions are non-linear. Indeed, if there...
Pure Earth is a GiveWell grantee that works to reduce lead and mercury exposure. In an August 2023 post, they provided a "preliminary analysis" suggesting that their lead reduction program in Bangladesh "can avert an...
Hi! I’m Nico and I’m on the research team at Founders Pledge. We noticed that the way we compare current to future income benefits is in tension with how we compare income benefits across interventions. However, aligning these two comparisons—choosing the same function ...
Yup, that's an accurate summary of my beliefs (with the caveat that is non-critical and can be replaced with a constant or whatever else you want; only is essential). Put another way, is a single preference parameter that determines the marginal utility of income, and that affects how we value both income and health. I think any other assumption leads to internal inconsistency, or doesn't represent utility maximization.
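If I'm reading the comment right (its inline math did not survive the page extraction), the "single preference parameter" it describes matches the standard isoelastic (CRRA) utility form; the symbol `eta` and the function names below are my labeling, not the comment's own notation:

```python
# Sketch of isoelastic (CRRA) utility, a standard form consistent with the
# comment's "single preference parameter" governing marginal utility of income.
# "eta" is my stand-in symbol; the comment's original notation was lost.
import math

def utility(c: float, eta: float) -> float:
    """Isoelastic utility of consumption c; reduces to log utility at eta = 1."""
    if eta == 1:
        return math.log(c)
    # The "-1" shift is the non-critical additive constant the comment mentions.
    return (c ** (1 - eta) - 1) / (1 - eta)

def marginal_utility(c: float, eta: float) -> float:
    """u'(c) = c**(-eta): eta alone sets how fast marginal utility declines."""
    return c ** (-eta)

# With eta = 1, doubling income halves marginal utility:
print(marginal_utility(2.0, 1.0) / marginal_utility(1.0, 1.0))  # → 0.5
```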
...Does that sound right? If so, my view would be that valuing an extra life year according to for some is a funct
The Simple Macroeconomics of AI is a 2024 working paper by Daron Acemoglu which models the economic growth effects of AI and predicts them to be small: about a 0.06% increase in TFP growth annually. This stands in contrast to many predictions that forecast immense impacts...
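As a rough back-of-envelope (the ten-year horizon here is my illustrative assumption, not a figure from the paper), a 0.06 percentage-point annual boost to TFP growth compounds to well under one percent over a decade:

```python
# Back-of-envelope: how small is a 0.06 percentage-point annual TFP boost?
# The ten-year horizon is my illustrative assumption, not from Acemoglu's paper.
boost = 0.0006   # +0.06% to the annual TFP growth rate
years = 10
cumulative = (1 + boost) ** years - 1
print(f"{cumulative:.2%}")  # roughly 0.6% higher TFP level after a decade
```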
I think EAs who consume economics research are accustomed to the challenges of interpreting applied microeconomic research: causal inference problems and the like. But I don't think they are accustomed to interpreting structural models critically, which will become more of a problem as structural models of AI and economic growth become more common. The most common failure mode when interpreting structural research is failing to recognize model-concept mismatch. It looks something like this:
Some helpful thoughts on this are here.
I think the evidence for the price-taste-convenience hypothesis is unfortunately fairly weak, for what it's worth. This analysis and this analysis are, I think, the best write-ups on this.
In discussions (both online and in person) about applicant experience in hiring rounds, I've heard repeatedly that applicants want feedback. Giving in-depth feedback is costly (and risky), but here is an example I received that strikes me as low-cost and low-risk. I've tweaked it slightly to make it more of a template.
"Based on your [resume/application form/work sample], our team thinks you're a potential fit and would like to invite you to the next step of the application process: a [STEP]. You are being asked to complete [STEP] because you are currently in the top 20% of all applicants."
The phrasing "you are currently in the top 20% of all applicants" is nice. I like that. I hadn't seen it before, but I think it is something that EA organizations (or hiring teams at any organization) could easily adapt and use in many hiring rounds. While you don't always know exactly what percentile a candidate falls into, you can give broad information, such as being in the top X%. It is a way to give candidates a small amount of feedback without requiring much time or effort and without taking on legal risk.
I was chatting recently to someone who had difficulty knowing how to orient to x-risk work (despite being a respected professional working in that field). They expressed that they didn't find it motivating at a gut level in the same way they did with poverty or animal stuff...
Noting another recent post doing this: https://forum.effectivealtruism.org/posts/RbCnvWyoiDFQccccj/on-the-dwarkesh-chollet-podcast-and-the-cruxes-of-scaling-to
I think my results are probably SOTA based on more recent updates.
I feel like this is a pretty strange way to draw the line about what counts as an "LLM solution".
Consider the following simplified dialogue as an example of why I don't think this is a natural place to draw the line:
Human skeptic: Humans don't exhibit real intelligence. You see, they'll never do something as...